Model-based reconstruction employing the time separation technique (TST) was found to improve dynamic perfusion imaging of the liver using C-arm cone-beam computed tomography (CBCT). To apply TST using prior knowledge extracted from CT perfusion data, the liver should be accurately segmented from the CT scans. Reconstructions of primary and model-based CBCT data need to be segmented for proper visualisation and interpretation of perfusion maps. This research proposes Turbolift learning, which trains a multi-scale Attention UNet on different liver segmentation tasks serially, following the order of the trainings of CT, CBCT, CBCT TST - making the previous trainings act as pre-training stages for the subsequent ones - addressing the problem of limited number of datasets available for training. For the final task of liver segmentation from CBCT TST, the proposed approach achieved overall Dice scores of 0.874±0.031 and 0.905±0.007 in 6-fold and 4-fold cross-validation experiments, respectively - securing statistically significant improvements over the model trained only on that task. Experiments revealed that Turbolift not only improves the overall performance of the model but also makes it robust against artefacts originating from the embolisation materials and truncation artefacts. Additionally, in-depth analyses confirmed the order of the segmentation tasks. This paper shows the potential of segmenting the liver from CT, CBCT, and CBCT TST, learning from the limited available training data, which may in the future be used for the visualisation and evaluation of perfusion maps for the assessment of liver diseases.
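As an illustrative sketch only (not the paper's implementation), the core idea of the time separation technique can be viewed as approximating each voxel's time-attenuation curve by a linear combination of a few temporal basis functions derived from prior CT perfusion data. The function name `tst_fit` and the plain least-squares fit below are assumptions for illustration:

```python
import numpy as np

def tst_fit(curves, basis):
    """Time separation technique, toy version: approximate each
    time-attenuation curve (rows of `curves`, shape N x T) as a linear
    combination of temporal basis functions (rows of `basis`, shape K x T).
    Returns one coefficient vector per curve (shape N x K)."""
    # Solve basis.T @ x = curves.T in the least-squares sense
    coeffs, *_ = np.linalg.lstsq(basis.T, curves.T, rcond=None)
    return coeffs.T
```

With curves that truly lie in the span of the basis, the fit recovers the mixing coefficients exactly; in practice the basis would be extracted from prior CT perfusion scans.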
translated by Google Translate
Motion artefacts in magnetic resonance brain images are a crucial issue. The assessment of MR image quality is fundamental before proceeding with a clinical diagnosis. If motion artefacts alter the correct delineation of structures and substructures of the brain, lesions, tumours, and so on, the patient needs to be re-scanned; otherwise, neuro-radiologists could report an inaccurate or incorrect diagnosis. The first step after scanning a patient is therefore "image quality assessment", to decide whether the acquired images are diagnostically acceptable. An automatic image quality assessment based on structural similarity index (SSIM) regression through a residual neural network is proposed here, with the additional possibility of classifying images into groups defined by SSIM ranges. The method predicts the SSIM value of an input image in the absence of a reference ground-truth image. The networks were able to detect motion artefacts, and the best performance for both the regression and classification tasks was consistently achieved with ResNet-18 and contrast augmentation. The mean and standard deviation of the residuals' distribution were μ = -0.0009 and σ = 0.0139, respectively. The best accuracies for the classification task with 3, 5, and 10 classes were 97, 95, and 89%, respectively. The obtained results show that the proposed method could be a useful tool to support neuro-radiologists and radiographers in evaluating image quality before diagnosis.
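The classification-by-SSIM-range idea can be sketched as simple uniform binning of the predicted SSIM values; the helper name `ssim_to_class` and the uniform bin edges are assumptions for illustration, not the paper's exact grouping:

```python
import numpy as np

def ssim_to_class(ssim_values, n_classes=3):
    """Map predicted SSIM values in [0, 1] to quality classes by
    uniform binning; class 0 = lowest quality band."""
    ssim_values = np.clip(np.asarray(ssim_values, dtype=float), 0.0, 1.0)
    # n_classes - 1 interior edges yield labels 0 .. n_classes - 1
    edges = np.linspace(0.0, 1.0, n_classes + 1)[1:-1]
    return np.digitize(ssim_values, edges)

preds = np.array([0.91, 0.42, 0.73])
print(ssim_to_class(preds, n_classes=3))  # -> [2 1 2]
```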
Deep learning models have shown their potential for several applications. However, most of the models are difficult to trust due to their complex reasoning, commonly known as the black-box problem. Some fields, such as medicine, require a high degree of transparency to accept and adopt such technologies. Consequently, creating interpretable/explainable models, or applying post-hoc methods on classifiers, is required to build trust in deep learning models. Moreover, deep learning methods can be used for segmentation tasks, which typically require hard-to-obtain, time-consuming, manually-annotated segmentation labels for training. This paper introduces three inherently interpretable classifiers to tackle both of these problems as one. The localisation heatmaps provided by the networks - representing the model's focus areas and used for the classification decisions - can be directly interpreted, without requiring any post-hoc methods to derive information for model explanation. The models are trained in a supervised fashion using the input images and only the classification labels as ground truth - without using any information about the location of the region of interest (i.e. segmentation labels), making the segmentation training of the models weakly-supervised through the classification labels. The final segmentation is obtained by thresholding these heatmaps. The models were employed for the task of multi-class brain tumour classification using two different datasets, resulting in a best F1-score of 0.93 for the supervised classification task, while securing a Dice score of 0.67±0.08 for the weakly-supervised segmentation task. Furthermore, the accuracy obtained on a subset of tumour-only images outperformed state-of-the-art glioma tumour grading classifiers, with the best model achieving 98.7% accuracy.
Expert interpretation of anatomical images of the human brain is the central part of neuro-radiology. Several machine learning-based techniques have been proposed to assist in the analysis process. However, ML models typically need to be trained to perform a specific task, e.g., brain tumour segmentation or classification. Not only do the corresponding training data require laborious manual annotations, but a wide variety of abnormalities can be present in a human brain MRI - even several simultaneously - which makes representing all possible anomalies very challenging. Hence, a possible solution is an unsupervised anomaly detection (UAD) system that learns the data distribution from an unlabelled dataset of healthy subjects and is then applied to detect out-of-distribution samples. Such a technique can then be used to detect anomalies - lesions or abnormalities such as brain tumours - without explicitly training the model for that specific pathology. Several variational autoencoder (VAE) based techniques have been proposed in the past for this task. Even though they perform well on artificially simulated anomalies, many of them perform poorly when detecting anomalies in clinical data. This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA) that is more robust on clinical data and demonstrates its applicability in detecting anomalies such as tumours in brain MRIs. The proposed pipeline achieved a Dice score of 0.642±0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859±0.112 while detecting artificially induced anomalies, while the best performing baseline achieved 0.522±0.135 and 0.783±0.111, respectively.
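The detection principle behind autoencoder-based UAD can be sketched in a few lines: regions the model cannot reconstruct well (because it has only seen healthy anatomy) show a large reconstruction error and are flagged as anomalous. The function name and the fixed error threshold below are illustrative assumptions; StRegA's actual pre- and post-processing is more involved:

```python
import numpy as np

def anomaly_map(image, reconstruction, threshold=0.2):
    """Pixel-wise anomaly mask from an autoencoder reconstruction:
    pixels with absolute reconstruction error above `threshold`
    are marked anomalous (1), the rest healthy (0)."""
    err = np.abs(np.asarray(image, float) - np.asarray(reconstruction, float))
    return (err > threshold).astype(np.uint8)
```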
CT and MRI are two widely used clinical imaging modalities for non-invasive diagnosis. However, both modalities come with certain problems: CT uses harmful ionising radiation, and MRI suffers from slow acquisition speed. Both problems can be addressed by undersampling, such as sparse sampling. However, such undersampled data lead to lower resolution and introduce artefacts. Several techniques, including deep learning based methods, have been proposed to reconstruct such data. Nevertheless, the undersampled reconstruction problems of these two modalities have always been considered as two separate problems and tackled by different research works. This paper proposes a unified solution for both sparse CT and undersampled radial MRI reconstruction, achieved by applying Fourier transform-based pre-processing to the radial MRI and then reconstructing both modalities using sinogram upsampling combined with filtered back-projection. The Primal-Dual network is a deep learning based method for reconstructing sparsely-sampled CT data. This paper introduces Primal-Dual UNet, which improves the Primal-Dual network in terms of accuracy and reconstruction speed. The proposed method resulted in an average SSIM of 0.932 for sparse CT reconstruction with fan-beam geometry at a sparsity level of 16, a statistically significant improvement over the previous model, which resulted in 0.919. Furthermore, the proposed model resulted in average SSIMs of 0.903 and 0.957 while reconstructing undersampled brain and abdominal MRI data with an acceleration factor of 16 - statistically significant improvements over the original model, which resulted in 0.867 and 0.949. Finally, this paper shows that the proposed network not only improves the overall image quality, but also improves the image quality in the regions of interest, and generalises better in the presence of a needle.
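The sparse-view acquisition and the zero-filled input that an upsampling network would receive can be sketched as array slicing on a sinogram (rows = projection angles). The helper names and the zero-filling baseline are assumptions for illustration; the paper's sinogram upsampling is learned, not zero-filled:

```python
import numpy as np

def sparse_sample(sinogram, sparsity=16):
    """Retain every `sparsity`-th projection angle of a full sinogram,
    simulating sparse-view CT acquisition."""
    return sinogram[::sparsity]

def zero_fill(sinogram, sparsity=16):
    """Zero-fill the missing angles so a reconstruction network receives
    a fixed-size input (a common naive upsampling baseline)."""
    full = np.zeros((sinogram.shape[0] * sparsity,) + sinogram.shape[1:])
    full[::sparsity] = sinogram
    return full
```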
The vasculature of the human brain supplies it with the required nutrients and oxygen. As a vulnerable part of the cerebral blood supply, pathology of the small vessels can cause serious problems such as Cerebral Small Vessel Diseases (CSVD). CSVD has also been shown to be related to neurodegeneration, such as Alzheimer's disease. With the advancement of 7 Tesla MRI systems, higher spatial image resolution can be achieved, enabling the depiction of very small vessels in the brain. Non-deep-learning approaches to vessel segmentation, e.g., Frangi's vessel enhancement with subsequent thresholding, are capable of segmenting medium to large vessels but often fail to segment small vessels. The sensitivity of these methods to small vessels can be increased by extensive parameter tuning or manual corrections, albeit making them time-consuming, laborious, and infeasible for larger datasets. This paper proposes a deep learning architecture to automatically segment small vessels in 7 Tesla 3D Time-of-Flight (ToF) Magnetic Resonance Angiography (MRA) data. The algorithm was trained and evaluated on a small, only semi-automatically segmented dataset of 11 subjects: six were used for training, two for validation, and three for testing. A deep learning model based on a U-Net with multi-scale supervision was trained on the training subset and, to improve generalisation performance, was made robust to elastic deformations in a self-supervised manner using deformation-aware learning. The proposed technique was evaluated quantitatively and qualitatively against the test set and achieved a Dice score of 80.44±0.83. Furthermore, the result of the proposed method was compared against a selected manually segmented region (62.07 resultant Dice) and showed a significant improvement (18.98%) with deformation-aware learning.
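The deformation-aware self-supervision idea - train the model to be consistent under random deformations of the input - can be illustrated with a toy deformation. A random integer shift stands in for the elastic deformations actually used; the function name and this simplification are assumptions for illustration only:

```python
import numpy as np

def random_shift_deform(volume, max_shift=2, seed=0):
    """Toy stand-in for an elastic deformation: apply a random integer
    shift along every axis. A deformation-aware model is trained so that
    its prediction on the deformed input matches the deformed prediction
    on the original input (illustrative only)."""
    rng = np.random.default_rng(seed)
    shifts = tuple(int(s) for s in
                   rng.integers(-max_shift, max_shift + 1, size=volume.ndim))
    deformed = np.roll(volume, shifts, axis=tuple(range(volume.ndim)))
    return deformed, shifts
```

Because the shift is invertible, the consistency target is well defined: rolling the deformed volume back by the negated shifts recovers the original exactly.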
Search and Rescue (SAR) missions in remote environments often employ autonomous multi-robot systems that learn, plan, and execute a combination of local single-robot control actions, group primitives, and global mission-oriented coordination and collaboration. Often, SAR coordination strategies are manually designed by human experts who can remotely control the multi-robot system and enable semi-autonomous operations. However, in remote environments where connectivity is limited and human intervention is often not possible, decentralized collaboration strategies are needed for fully-autonomous operations. Nevertheless, decentralized coordination may be ineffective in adversarial environments due to sensor noise, actuation faults, or manipulation of inter-agent communication data. In this paper, we propose an algorithmic approach based on adversarial multi-agent reinforcement learning (MARL) that allows robots to efficiently coordinate their strategies in the presence of adversarial inter-agent communications. In our setup, the objective of the multi-robot team is to discover targets strategically in an obstacle-strewn geographical area by minimizing the average time needed to find the targets. It is assumed that the robots have no prior knowledge of the target locations, and they can interact with only a subset of neighboring robots at any time. Based on the centralized training with decentralized execution (CTDE) paradigm in MARL, we utilize a hierarchical meta-learning framework to learn dynamic team-coordination modalities and discover emergent team behavior under complex cooperative-competitive scenarios. The effectiveness of our approach is demonstrated on a collection of prototype grid-world environments with different specifications of benign and adversarial agents, target locations, and agent rewards.
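The target-search objective - minimise the average time needed to find targets in a grid world - can be made concrete with a minimal single-robot environment; the class, its reward scheme, and the single-agent simplification are hypothetical illustrations of the setting, not the paper's multi-robot MARL environment:

```python
class GridSearchEnv:
    """Minimal grid-world target search: each step costs -1 reward, so a
    policy maximising return minimises the time to reach the target
    (illustrative single-robot simplification)."""
    MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up/down/left/right

    def __init__(self, size=5, target=(4, 4)):
        self.size, self.target = size, target
        self.pos, self.steps = (0, 0), 0

    def step(self, action):
        dr, dc = self.MOVES[action]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)  # clip to grid
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos, self.steps = (r, c), self.steps + 1
        done = self.pos == self.target
        reward = 0.0 if done else -1.0
        return self.pos, reward, done
```

In the paper's setting, multiple such agents would act concurrently with limited neighbourhood communication, some of it possibly adversarial.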
Accurate and robust extrinsic calibration is necessary for deploying autonomous systems which need multiple sensors for perception. In this paper, we present a robust system for real-time extrinsic calibration of multiple lidars in the vehicle base frame without the need for any fiducial markers or features. We base our approach on matching absolute GNSS and estimated lidar poses in real-time. Comparing rotation components allows us to improve the robustness of the solution compared to the traditional least-squares approach, which compares translation components only. Additionally, instead of comparing all corresponding poses, we select the poses comprising maximum mutual information based on our novel observability criteria. This allows us to identify a subset of the poses helpful for real-time calibration. We also provide stopping criteria for ensuring calibration completion. To validate our approach, extensive tests were carried out on data collected using Scania test vehicles (7 sequences for a total of ~6.5 km). The results presented in this paper show that our approach is able to accurately determine the extrinsic calibration for various combinations of sensor setups.
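The rotation-component comparison relies on a residual between two rotations; a standard choice is the geodesic angle between rotation matrices, sketched below. The function name is an assumption, and the paper's exact residual formulation may differ:

```python
import numpy as np

def rotation_angle_error(R1, R2):
    """Geodesic angle (radians) between two 3x3 rotation matrices:
    the magnitude of the relative rotation R1^T R2."""
    R = R1.T @ R2
    cos = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)  # clip for safety
    return np.arccos(cos)
```

Minimising this angular residual over corresponding GNSS and lidar pose pairs constrains the rotational part of the extrinsic calibration more robustly than translation residuals alone.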
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs). Quantization is a technique for making neural networks more efficient by running them using low-bit integer arithmetic and is therefore commonly adopted in industry. Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization, and certification of the quantized representation is necessary to guarantee robustness. In this work, we present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs. Inspired by advances in robust learning of non-quantized networks, our training algorithm computes the gradient of an abstract representation of the actual network. Unlike existing approaches, our method can handle the discrete semantics of QNNs. Based on QA-IBP, we also develop a complete verification procedure for verifying the adversarial robustness of QNNs, which is guaranteed to terminate and produce a correct answer. Compared to existing approaches, the key advantage of our verification procedure is that it runs entirely on GPU or other accelerator devices. We demonstrate experimentally that our approach significantly outperforms existing methods and establish the new state-of-the-art for training and certifying the robustness of QNNs.
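The interval bound propagation at the heart of QA-IBP can be illustrated for a single affine layer: an axis-aligned box of inputs is propagated to sound output bounds. This is the standard real-valued IBP rule, shown as a minimal sketch; QA-IBP additionally handles the discrete integer semantics of quantized layers, which this example does not model:

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Interval bound propagation through y = W x + b: given element-wise
    input bounds lower <= x <= upper, return sound output bounds."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius  # |W| maps input radius to output radius
    return out_center - out_radius, out_center + out_radius
```

For robust training, such bounds are propagated through the whole network and the loss penalises the worst-case logit bounds instead of the nominal output.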
While the NLP community is generally aware of resource disparities among languages, we lack research that quantifies the extent and types of such disparity. Prior surveys estimating the availability of resources based on the number of datasets can be misleading as dataset quality varies: many datasets are automatically induced or translated from English data. To provide a more comprehensive picture of language resources, we examine the characteristics of 156 publicly available NLP datasets. We manually annotate how they are created, including input text and label sources and tools used to build them, and what they study, tasks they address and motivations for their creation. After quantifying the qualitative NLP resource gap across languages, we discuss how to improve data collection in low-resource languages. We survey language-proficient NLP researchers and crowd workers per language, finding that their estimated availability correlates with dataset availability. Through crowdsourcing experiments, we identify strategies for collecting high-quality multilingual data on the Mechanical Turk platform. We conclude by making macro and micro-level suggestions to the NLP community and individual researchers for future multilingual data development.